Working In Uncertainty

Innovating in the face of internal control regulations: Coping with internal control and risk management standards

by Matthew Leitch; first appeared on www.irmi.com in January 2004.

If you're involved in risk management you've probably noticed that more and more people want to tell you how to do it. New standards, regulations, statutes, and guidelines are emerging all the time, mostly driven by concerns about internal control.

Some of this is helpful. If an organisation is doing nothing and has no understanding of what it should be doing to manage its risks then the official guidance and regulations are a massive step forward.

However, at the other end of the scale, if you are an expert aiming to innovate for competitive advantage, then standards and regulations can get in the way. You want to tailor your approach to your organisation's unique requirements and try new ideas. Often it seems that the regulations would rather you did neither.

This article discusses the ways in which standards and regulations are currently most likely to stifle good innovations, and suggests ways for practitioners to uncover the hidden flexibility in official documents.

The common problem areas

Official requirements about risk management have often been driven by concerns about internal controls and so naturally reflect the favourite concepts and techniques of the external audit firms who have played a large role in writing them.

The theory is thin, with no serious attempt at quantification. The focus is usually on bad things that might happen, with no room for upside risks. The actions required are mostly about evaluation and tend not to cover the design of controls or planning for improvements that will be needed in future.

A common problem with technical guidance is that the advice assumes the items on a risk register are individual risks, when in practice virtually all risk register items are, and need to be, sets of risks. I'll explain the ramifications of this in the next section.

Finding the hidden flexibility

If you feel that official pronouncements are making it harder for you to do risk management in a progressive way don't give up. There's usually a lot more flexibility in official documents than first seems the case.

Firstly, a high proportion of their statements turn out to be no more than examples of how it could be done. Perhaps this happens because rule writers sensibly stop short of closing the door on valid alternatives. Look for phrases like ‘such as’, ‘sources of evidence should include’, and ‘the illustrative pro forma in appendix B’.

The words of the key rules tend to be correct even when examples used to illustrate their application are not.

Secondly, the rules almost never forbid other things you might be doing as well as meeting their requirements. Furthermore, they don't say their requirements are the most important thing you should be doing.

Thirdly, a lot of the baggage around a set of rules is not in the rules themselves but in common interpretation of them. Just because something is usually taken as having a certain meaning does not mean you have to go with that meaning, though of course it is much more difficult to argue the case for being different.

Some specifics

Ratings of probability and impact

You may want to rate risks in ways that differ from those envisaged by the relevant official documents. You may want more or less quantification, or to be selective in some way.

It is common for rules to require risks to be evaluated by considering their probability and the impact if they occurred. The natural assumption is that each item on a risk register should have a rating for its probability, and another for its impact. This approach is often shown as an illustration of how it might be done.

However, risk register items are almost never individual risks. Instead they are, and need to be, sets of risks. It is illogical to make single ratings of probability and impact for whole sets of risks. If you really want to do it properly you need a probability distribution of impact.
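
To make this concrete, here is a minimal sketch in Python of what a probability distribution of impact for a set of risks might look like. The risk names, probabilities, and impact figures are invented for illustration only, not taken from any standard or real register.

```python
import random

# Hypothetical register item "supplier failures": really a set of risks,
# each with its own probability and impact (all figures invented).
risks = [
    {"name": "Supplier A insolvency",      "probability": 0.05, "impact": 400_000},
    {"name": "Supplier B late delivery",   "probability": 0.30, "impact": 20_000},
    {"name": "Supplier C quality failure", "probability": 0.10, "impact": 80_000},
]

def simulate_total_impact(risks, trials=10_000):
    """Monte Carlo: in each trial, sample which risks occur and sum their impacts."""
    totals = []
    for _ in range(trials):
        totals.append(sum(r["impact"] for r in risks
                          if random.random() < r["probability"]))
    return sorted(totals)

totals = simulate_total_impact(risks)

# The whole set has a distribution of impact, not one probability and one impact.
print("Chance of some impact:  ", sum(t > 0 for t in totals) / len(totals))
print("Median impact:          ", totals[len(totals) // 2])
print("95th percentile impact: ", totals[int(len(totals) * 0.95)])
```

Even this crude simulation shows why a single pair of ratings cannot summarise the item: the ‘probability’ depends on which impacts you choose to count, and the ‘impact’ varies from one outcome to another.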

Before you conclude that the rules are asking you to do something illogical check the wording of the requirements (as opposed to the examples). A phrase like ‘consider both probability and impact’ does not necessarily mean that they should be rated individually for every individual risk or every set of risks. It just means that probability and impact should be considered in some way.

It probably doesn't even say you have to consider those factors for every item. In practice many risk register items are clearly key or clearly trivial, and time-consuming analysis would add nothing.

Finally, if you were to rate each risk register item (i.e. set of risks) for probability, and then for impact, would it meet the requirements? Is it possible that doing something meaningless would meet the requirement to consider probability and impact?

Risk appetite

Along with the probability x impact ratings one often sees a section about risk appetite. The idea is usually that risks with a high probability and high impact need action whereas lesser risks may not.

You may have your own ideas about how to do this better and, indeed, this risk appetite approach is not strictly correct because it fails to consider the scope for mitigation.

Fortunately, this is often reflected in the wording of the rules which typically make the risk ratings a guideline, not an absolute rule.

Risk factors

One useful technique that rarely comes up in official pronouncements on risk management and internal control is the use of risk factors. For example, when looking at a set of strategic initiatives to assess the risk of failure of each one, it is helpful to look at factors that tend to drive the risk of failure. If an initiative scores badly on every factor you should worry about its future, especially if the owner is saying that all is well.
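
As a minimal sketch of the idea, one might score each initiative against a handful of failure-driving factors and compare the totals. The factor names, weights, and initiatives below are invented for illustration only.

```python
# Illustrative risk factors that tend to drive failure of strategic initiatives.
# The factor names and weights are assumptions made up for this sketch.
FACTORS = {
    "novel technology": 3,
    "many external dependencies": 2,
    "unclear objectives": 3,
    "inexperienced team": 2,
    "aggressive timetable": 1,
}

def risk_factor_score(flags):
    """Sum the weights of the factors that apply to an initiative."""
    return sum(weight for factor, weight in FACTORS.items() if factor in flags)

# Invented initiatives and the factors flagged against each.
initiatives = {
    "New billing platform": {"novel technology", "aggressive timetable"},
    "Office relocation": {"many external dependencies"},
    "Overseas expansion": {"unclear objectives", "inexperienced team",
                           "many external dependencies", "aggressive timetable"},
}

for name, flags in sorted(initiatives.items(),
                          key=lambda item: risk_factor_score(item[1]),
                          reverse=True):
    print(f"{name}: factor score {risk_factor_score(flags)}")
```

An initiative with a high total, especially one whose owner reports no problems, is a natural candidate for closer attention.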

I have yet to see an official document that mentions this technique, let alone rules it out.

Upside risks

Most risk management standards are happily progressive in that they accept upside risks as well as downside risks. However, they often fail to treat them properly and this can look like a problem if you are keen to integrate risk and potential opportunity management in one management process.

For example, in the draft COSO ERM guide, exposed for comment in 2003, upside risks could be identified but then had to be transferred into your strategy process and taken out of risk management. This was a definite statement, not just a suggestion or example.

At first glance this looks like a fatal blow to progressive risk management. However, suppose your strategy process had risk management integrated into it, with both upside and downside coverage? Your upside risks could be transferred in a conceptual sense and so would be outside the scope of COSO ERM, but they could still be on the same documents and discussed at the same meetings.

The top 10 risks

Some regulations call for a list or discussion of key risks. A list of about ten risks is usually considered appropriate. The problem is that risk register items are sets of risks, not individual risks. What makes it into the top ten depends partly on how aggregated each risk set is. This undermines the whole concept of a list of key risks.

Fortunately, regulations tend to recognise the difficulties of saying that a certain list of risks (or risk sets) is the top list. Even if the regulations you are dealing with do require this, check if this aspect of your list has to be externally audited. Probably not.

There are some logical approaches to describing your most important risks. One is to divide all the risks your organisation faces into about ten sets and discuss each set in your list of ‘top’ risks. Another is to find some basis for making the level of aggregation comparable across items. This can be done by looking at the units already recognised by your management structure or in management meetings.

For example, suppose the organisation has 20 strategic initiatives ongoing and a meeting is held monthly to discuss them. It would make perfect sense to rate the risk of each initiative and allocate time in the meetings in accordance with riskiness. This doesn't help with comparing strategic initiatives with other sources of risk, but illustrates the principle.
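
As a sketch of that principle, with invented initiatives and ratings, one might allocate a fixed amount of meeting time in proportion to each initiative's risk rating.

```python
# Illustrative only: split a 120-minute monthly meeting across initiatives
# in proportion to their (invented) risk ratings.
ratings = {
    "New billing platform": 8,
    "Office relocation": 2,
    "Overseas expansion": 9,
    "Product line refresh": 5,
}

MEETING_MINUTES = 120
total_rating = sum(ratings.values())

for initiative, rating in sorted(ratings.items(), key=lambda kv: kv[1], reverse=True):
    minutes = round(MEETING_MINUTES * rating / total_rating)
    print(f"{initiative}: about {minutes} minutes")
```

The same proportional idea could be applied to any other collection of comparable units your management structure already recognises, such as departments or projects.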

Forward planning of internal control development

Regulations on internal control tend to read as if controls are improved only when a deficiency has been identified. Of course controls are also improved in advance of new needs, but that is not usually part of the official requirements.

If you want to focus attention on planning for internal control changes in advance the chances are that the regulations applying to your organisation don't even mention it, let alone rule it out.

Linear analysis

Finally, there's a subtle assumption in most risk standards and guides that seems so obviously sensible that it is easy to overlook its potentially damaging implications.

Risk management is usually portrayed as a linear process starting, perhaps, with objectives and moving on through stages like risk identification, risk evaluation, and so on.

In real life thinking is not so simple. We dart backwards and forwards through the analysis. Our objectives can be influenced by perceived risks. There are times when we don't have clear objectives, so what we do know about our objectives has to be the starting point, with the detail coming later. The best way to cut up the risk sets can be influenced by the structure of our internal controls. All this is sensible and desirable, but does not fit into the simple linear scheme.

If the official description seems at odds with the reality in your organisation, or you want to try alternative sequences, consider interpreting the guidance as a structure for documentation rather than as a literal procedure for thought.

Summary

Risk management is an exciting field with vast scope for innovation. We should not let standards, guidelines, and regulations prevent us from trying new things. In this article I've suggested ways to find more flexibility in official pronouncements.



Words © 2004 Matthew Leitch. First published January 2004.